Reversible jump MCMC

Authors

  • Peter J. Green
  • David I. Hastie
Abstract

Statistical problems where ‘the number of things you don’t know is one of the things you don’t know’ are ubiquitous in statistical modelling. They arise both in traditional modelling situations such as variable selection in regression, and in more novel methodologies such as object recognition, signal processing, and Bayesian nonparametrics. All such ‘trans-dimensional’ problems can be formulated generically, sometimes with a little ingenuity, as a matter of joint inference about a model indicator k and a parameter vector θ_k, where the model indicator determines the dimension n_k of the parameter, but this dimension varies from model to model. Almost invariably in a frequentist setting, inference about these two kinds of unknown is based on different logical principles, but, at least formally, the Bayes paradigm offers the opportunity of a single logical framework – it is the joint posterior π(k, θ_k | Y) of model indicator and parameter given data Y that is the basis for inference.

Reversible jump Markov chain Monte Carlo (Green, 1995) is a method for computing this posterior distribution by simulation, or more generally, for simulating from a Markov chain whose state is a vector whose dimension is not fixed. It has many applications other than in Bayesian statistics. Much of what follows will apply equally to them all; however, for simplicity, we will use the Bayesian motivation and terminology throughout.

The joint inference problem can be set naturally in the form of a simple Bayesian hierarchical model. We suppose that a prior p(k) is specified over models k in a countable set K, and for each k we are given a prior distribution p(θ_k | k), along with a likelihood L(Y | k, θ_k) for the data Y. For simplicity of exposition, we suppose that p(θ_k | k) is a probability density, and that there are no other parameters, so that where there are parameters common to all models these are subsumed into each θ_k ∈ X_k ⊂ R^{n_k}. Additional parameters, perhaps in additional layers of a hierarchy, are easily dealt with. Note that in this chapter, all probability distributions are proper.

In some settings, p(k) and p(θ_k | k) are not separately available, even up to multiplicative constants; this applies for example in many point process models. However, it will be clear that what follows requires specification only of the product p(k, θ_k) = p(k) × p(θ_k | k) of these factors, up to a multiplicative constant.

In many models there are discrete unknowns as well as continuously distributed ones. Such unknowns, whether fixed or variable in number, cause no additional difficulties; only discrete-state Markov chain notions are needed to…
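The hierarchical setup described above, a prior p(k) over models together with a within-model prior p(θ_k | k) and likelihood L(Y | k, θ_k), can be exercised in a minimal sketch. The toy example below is not from the chapter: it is a hypothetical comparison of a constant-mean model (k = 1) against a straight-line model (k = 2) on simulated data. For the between-model move it uses the simplest valid reversible jump proposal, an independence proposal that draws the new model's parameters directly from their prior, so the dimension-matching Jacobian is 1 and the prior and proposal densities cancel in the acceptance ratio, leaving only the likelihood ratio (and the ratio p(k')/p(k), which is 1 here for a uniform model prior).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: the truth is a constant mean of 1 (zero slope)
x = np.linspace(0.0, 1.0, 30)
y = 1.0 + rng.normal(0.0, 1.0, 30)

def log_prior(theta):
    # p(theta | k): independent N(0, 10^2) on each component
    return -0.5 * np.sum(theta**2) / 100.0 - len(theta) * np.log(10.0 * np.sqrt(2 * np.pi))

def log_lik(k, theta):
    # k = 1: constant mean; k = 2: straight line (unit noise variance)
    mu = theta[0] if k == 1 else theta[0] + theta[1] * x
    return -0.5 * np.sum((y - mu) ** 2)

def rjmcmc(n_iter=20000):
    k, theta = 1, np.array([0.0])
    model_trace = []
    for _ in range(n_iter):
        if rng.random() < 0.5:
            # Within-model move: symmetric random-walk Metropolis on theta
            prop = theta + rng.normal(0.0, 0.3, len(theta))
            log_a = (log_lik(k, prop) + log_prior(prop)
                     - log_lik(k, theta) - log_prior(theta))
            if np.log(rng.random()) < log_a:
                theta = prop
        else:
            # Between-model jump: propose the other model's parameters
            # from their prior, so prior/proposal terms cancel and the
            # Jacobian is 1; with uniform p(k), log p(k')/p(k) = 0
            k_new = 2 if k == 1 else 1
            theta_new = rng.normal(0.0, 10.0, k_new)
            log_a = log_lik(k_new, theta_new) - log_lik(k, theta)
            if np.log(rng.random()) < log_a:
                k, theta = k_new, theta_new
        model_trace.append(k)
    return np.array(model_trace)

ks = rjmcmc()
print("estimated posterior P(k=1 | Y):", np.mean(ks == 1))
```

Prior independence proposals like this are valid but typically inefficient, which is exactly the difficulty the related articles below address: in higher dimensions a draw from the prior rarely lands near the posterior mass of the other model, so jump acceptance rates collapse. The chain here should nonetheless favour the simpler model, since the diffuse prior on the slope imposes an Occam penalty on k = 2.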


Similar articles

On the automatic choice of reversible jumps

The major implementational problem for reversible jump MCMC is that there is commonly no natural way to choose jump proposals since there is no Euclidean structure to guide our choice. In this paper we will consider a mechanism for guiding the proposal choice by analysis of acceptance probabilities for jumps. Essentially the method involves an approximation for the acceptance probability around...


Improving the Acceptance Rate of Reversible Jump Mcmc Proposals

Recent articles have commented on the difficulty of proposing efficient reversible jump moves within MCMC. We suggest a new way to make proposals more acceptable using a secondary Markov chain to modify proposed moves — at little extra programming cost.


Detection and estimation of signals by reversible jump Markov chain Monte Carlo computations

Markov Chain Monte Carlo (MCMC) samplers have been a very powerful methodology for estimating signal parameters. With the introduction of the reversible jump MCMC sampler, which is a Metropolis-Hastings method adapted to general state spaces, the potential of the MCMC methods has risen to a new level. Consequently, the MCMC methods currently play a major role in many research activities. In thi...


Bayesian Inference on Principal Component Analysis Using Reversible Jump Markov Chain Monte Carlo

Based on the probabilistic reformulation of principal component analysis (PCA), we consider the problem of determining the number of principal components as a model selection problem. We present a hierarchical model for probabilistic PCA and construct a Bayesian inference method for this model using reversible jump Markov chain Monte Carlo (MCMC). By regarding each principal component as a poin...


Convergence Assessment for Reversible Jump MCMC Simulations

In this paper we introduce the problem of assessing convergence of reversible jump MCMC algorithms on the basis of simulation output. We discuss the various direct approaches which could be employed, together with their associated drawbacks. Using the example of fitting a graphical Gaussian model via RJMCMC, we show how the simulation output for models which can be parameterised so that paramet...


A comparison of reversible jump MCMC algorithms for DNA sequence segmentation using hidden Markov models

This paper describes a Bayesian approach to determining the number of hidden states in a hidden Markov model (HMM) via reversible jump Markov chain Monte Carlo (MCMC) methods. Acceptance rates for these algorithms can be quite low, resulting in slow exploration of the posterior distribution. We consider a variety of reversible jump strategies which allow inferences to be made in discretely obse...




Publication date: 1995